
    Predicting Short-term and Long-term HbA1c Response after Insulin Initiation in Patients with Type 2 Diabetes Mellitus using Machine Learning

    AIM: To assess the potential of supervised machine learning techniques to identify clinical variables for predicting short-term and long-term glycated hemoglobin (HbA1c) response after insulin treatment initiation in patients with type 2 diabetes mellitus (T2DM). MATERIALS AND METHODS: We included patients with T2DM from the Groningen Initiative to ANalyze Type 2 diabetes Treatment (GIANTT) database who started insulin treatment between 2007 and 2013 with a minimum follow-up of 2 years. Short-term and long-term response were defined at 6 (±2) and 24 (±2) months after insulin initiation, respectively. Patients were defined as good responders if they had a decrease in HbA1c ≥ 5 mmol/mol or reached the recommended level of HbA1c ≤ 53 mmol/mol. Twenty-four baseline clinical variables were used for the analysis, and the elastic net regularization technique was used for variable selection. The performance of three traditional machine learning algorithms was compared for predicting short-term and long-term responses, and the area under the receiver operating characteristic curve (AUC) was used to assess the performance of the prediction models. RESULTS: The elastic net regularization-based generalized linear model, including baseline HbA1c and eGFR, correctly classified short-term and long-term HbA1c response after treatment initiation with an AUC (95% CI) of 0.80 (0.78-0.83) and 0.81 (0.79-0.84), respectively, and outperformed the other machine learning algorithms. Using baseline HbA1c alone, AUCs of 0.71 (0.65-0.73) and 0.72 (0.66-0.75) were obtained for predicting short-term and long-term response, respectively. CONCLUSIONS: The machine learning algorithms performed well in predicting an individual's short-term and long-term HbA1c response from baseline clinical variables.
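
    A minimal sketch of the kind of model described above: an elastic-net-penalised logistic regression predicting good responders from baseline HbA1c and eGFR. The data are synthetic and the variable set is reduced to two features for illustration; this is not the GIANTT analysis itself.

```python
# Minimal sketch, not the GIANTT analysis: elastic-net logistic regression
# on two illustrative baseline variables with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
baseline_hba1c = rng.normal(75, 15, n)   # mmol/mol
egfr = rng.normal(70, 20, n)             # mL/min/1.73 m^2
X = np.column_stack([baseline_hba1c, egfr])
# Synthetic outcome: higher baseline HbA1c makes a "good response" more likely.
y = (baseline_hba1c + rng.normal(0, 10, n) > 72).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(scaler.transform(X_tr), y_tr)
proba = model.predict_proba(scaler.transform(X_te))[:, 1]
print("test AUC:", roc_auc_score(y_te, proba))
```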

    Automated tracking of level of consciousness and delirium in critical illness using deep learning

    Over- and under-sedation are common in the ICU and contribute to poor ICU outcomes, including delirium. Behavioral assessments, such as the Richmond Agitation-Sedation Scale (RASS) for monitoring levels of sedation and the Confusion Assessment Method for the ICU (CAM-ICU) for detecting signs of delirium, are often used. As an alternative, brain monitoring with electroencephalography (EEG) has been proposed in the operating room, but it is challenging to implement in the ICU due to the differences between critical illness and elective surgery, as well as the duration of sedation. Here we present a deep learning model based on a combination of convolutional and recurrent neural networks that automatically tracks both the level of consciousness and delirium using frontal EEG signals in the ICU. For level of consciousness, the system achieves a median accuracy of 70% when allowing predictions to be within one RASS level across all patients, which is comparable to or higher than the median technician-nurse agreement of 59%. For delirium, the system achieves an AUC of 0.80, with 69% sensitivity and 83% specificity at the optimal operating point. These results show that it is feasible to continuously track level of consciousness and delirium in the ICU.
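
    The combined convolutional-recurrent architecture can be sketched as follows. This is an illustrative PyTorch model over frontal-EEG spectrogram windows; the layer sizes, spectrogram shape, and the ten-level RASS output head are assumptions, not the authors' published architecture.

```python
# Illustrative CNN + RNN over EEG spectrograms; all sizes are assumptions.
import torch
import torch.nn as nn

class EEGSedationNet(nn.Module):
    def __init__(self, n_freq_bins=64, n_rass_levels=10):
        super().__init__()
        # Convolutions extract local spectro-temporal features.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A GRU tracks how the features evolve across time windows.
        self.gru = nn.GRU(input_size=32 * (n_freq_bins // 4),
                          hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_rass_levels)

    def forward(self, x):                     # x: (batch, 1, freq, time)
        z = self.conv(x)                      # (batch, 32, freq/4, time/4)
        z = z.permute(0, 3, 1, 2).flatten(2)  # (batch, time/4, features)
        out, _ = self.gru(z)
        return self.head(out[:, -1])          # one score per RASS level

model = EEGSedationNet()
logits = model(torch.randn(2, 1, 64, 128))    # two dummy spectrograms
print(logits.shape)                           # torch.Size([2, 10])
```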

    A metabolomics based molecular pathway analysis for how the SGLT2-inhibitor dapagliflozin may slow kidney function decline in patients with diabetes

    Aim: To investigate which metabolic pathways are targeted by the sodium-glucose co-transporter-2 inhibitor dapagliflozin to explore the molecular processes involved in its renal protective effects. Methods: An unbiased mass spectrometry plasma metabolomics assay was performed on baseline and follow-up (week 12) samples from the EFFECT II trial in patients with type 2 diabetes with non-alcoholic fatty liver disease receiving dapagliflozin 10 mg/day (n = 19) or placebo (n = 6). Transcriptomic signatures from tubular compartments were identified from kidney biopsies collected from patients with diabetic kidney disease (DKD) (n = 17) and healthy controls (n = 30) from the European Renal cDNA Biobank. Serum metabolites that significantly changed after 12 weeks of dapagliflozin were mapped to a metabolite-protein interaction network. These proteins were then linked with intra-renal transcripts that were associated with DKD or estimated glomerular filtration rate (eGFR). The impacted metabolites and their protein-coding transcripts were analysed for enriched pathways. Results: Of all measured (n = 812) metabolites, 108 changed (P < 0.05) during dapagliflozin treatment and 74 could be linked to 367 unique proteins/genes. Intra-renal mRNA expression analysis of the genes encoding the metabolite-associated proteins using kidney biopsies resulted in 105 genes that were significantly associated with eGFR in patients with DKD, and 135 genes that were differentially expressed between patients with DKD and controls. The combination of metabolites and transcripts identified four enriched pathways that were affected by dapagliflozin and associated with eGFR: glycine degradation (mitochondrial function), TCA cycle II (energy metabolism), L-carnitine biosynthesis (energy metabolism) and superpathway of citrulline metabolism (nitric oxide synthase and endothelial function). Conclusion: The observed molecular pathways targeted by dapagliflozin and associated with DKD suggest that modifying molecular processes related to energy metabolism, mitochondrial function and endothelial function may contribute to its renal protective effect.
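
    The first analytic step, testing which metabolites changed between baseline and week 12, might look like the following paired-test sketch. Patient and metabolite counts match the abstract, but the data are synthetic, and the downstream mapping to protein networks and renal transcripts is not shown.

```python
# Sketch of the metabolite screening step with synthetic data:
# paired t-tests on log intensities, baseline vs. week 12.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients, n_metabolites = 19, 812
baseline = rng.lognormal(0.0, 1.0, (n_patients, n_metabolites))
# Simulate a modest treatment effect on top of baseline levels.
week12 = baseline * rng.lognormal(0.05, 0.3, (n_patients, n_metabolites))

t, p = stats.ttest_rel(np.log(week12), np.log(baseline), axis=0)
changed = np.flatnonzero(p < 0.05)
print(f"{changed.size} of {n_metabolites} metabolites changed at P < 0.05")
```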

    ADARRI: a novel method to detect spurious R-peaks in the electrocardiogram for heart rate variability analysis in the intensive care unit

    We developed a simple and fully automated method for detecting artifacts in the R-R interval (RRI) time series of the ECG that is tailored to the intensive care unit (ICU) setting. From ECG recordings of 50 adult ICU subjects, we selected 60 epochs with valid R-peak detections and 60 epochs containing artifacts leading to missed or false positive R-peak detections. Next, we calculated the absolute value of the difference between two adjacent RRIs (adRRI) and obtained the empirical probability distributions of adRRI values for valid R-peaks and artifacts. From these, we calculated an optimal threshold for separating adRRI values arising from artifactual versus non-artifactual data. We compared the performance of our method with the methods of Berntson and Clifford on the same data. We identified 257,458 R-peak detections, of which 235,644 (91.5%) were true detections and 21,814 (8.5%) arose from artifacts. Our method showed superior performance for detecting artifacts, with 100% sensitivity, 99% specificity, 99% precision, a positive likelihood ratio of 100, and a negative likelihood ratio <0.001, compared with 99% sensitivity, 78% specificity, 82% precision, and positive and negative likelihood ratios of 4.5 and 0.013 for Berntson's method, and 55%, 98%, 96%, 27.5, and 0.460, respectively, for Clifford's method. A novel algorithm using a patient-independent threshold derived from the distribution of adRRI values in ICU ECG data identifies artifacts accurately and outperforms two other methods in common use. Furthermore, the threshold was calculated from real data from critically ill patients, and the algorithm is easy to implement.
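
    The core adRRI computation is simple enough to sketch directly: flag beats where the absolute difference between adjacent R-R intervals exceeds a threshold. The 0.3 s threshold here is an illustrative assumption; the paper derives its patient-independent threshold from the empirical adRRI distributions of valid versus artifactual detections.

```python
# Sketch of the adRRI rule; the 0.3 s threshold is an assumption for
# illustration, not the paper's empirically derived value.
import numpy as np

def flag_adrri_artifacts(rri, threshold=0.3):
    """rri: R-R intervals in seconds. Returns a boolean artifact mask."""
    adrri = np.abs(np.diff(rri))      # |RRI[i] - RRI[i-1]|
    mask = np.zeros(rri.shape, dtype=bool)
    mask[1:] = adrri > threshold      # flag intervals after a large jump
    return mask

rri = np.array([0.80, 0.82, 0.81, 1.90, 0.80, 0.79])  # one spurious interval
# Both the jump into and out of the bad interval are flagged:
print(flag_adrri_artifacts(rri))  # [False False False  True  True False]
```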

    Accurate detection of spontaneous seizures using a generalized linear model with external validation

    Objective: Seizure detection is a major facet of electroencephalography (EEG) analysis in neurocritical care, epilepsy diagnosis and management, and the instantiation of novel therapies such as closed-loop stimulation or optogenetic control of seizures. It is also of increased importance in high-throughput, robust, and reproducible preclinical research. However, seizure detectors are not widely relied upon in either clinical or research settings due to limited validation. In this study, we create a high-performance seizure-detection approach, validated in multiple data sets, with the intention that such a system could be available to users for multiple purposes. Methods: We introduce a generalized linear model trained on 141 EEG signal features for classification of seizures in continuous EEG for two data sets. The first (Focal Epilepsy) data set consisted of 16 rats with focal epilepsy, from which we collected 1012 spontaneous seizures over 3 months of 24/7 recording. We trained a generalized linear model on the 141 features, representing 20 feature classes including univariate and multivariate, linear and nonlinear, and time- and frequency-domain features. We tested performance on multiple held-out test data sets. We then used the trained model on a second (Multifocal Epilepsy) data set consisting of 96 rats with 2883 spontaneous multifocal seizures. Results: From the Focal Epilepsy data set, we built a pooled classifier with an area under the receiver operating characteristic curve (AUROC) of 0.995 and leave-one-out classifiers with an AUROC of 0.962. We validated our method on the independently constructed Multifocal Epilepsy data set, resulting in a pooled AUROC of 0.963. We separately validated a model trained exclusively on the Focal Epilepsy data set and tested on the held-out Multifocal Epilepsy data set, with an AUROC of 0.890. Latency to detection was under 5 seconds for over 80% of seizures and under 12 seconds for over 99% of seizures. Significance: This method achieves the highest published performance for seizure detection on multiple independent data sets. It can be applied to automated EEG analysis pipelines as well as closed-loop interventional approaches, and can be especially useful in research using animals, where there is an increased need for standardization and high-throughput analysis of large numbers of seizures.
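
    A toy version of this pipeline, per-window EEG features fed to a generalized linear model (logistic regression), is sketched below. Only two illustrative features are computed and the data are synthetic; the study used 141 features spanning 20 feature classes.

```python
# Toy pipeline: two per-window features (the study used 141) into a
# logistic-regression GLM, with synthetic "seizure" windows.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(win, fs=256):
    line_length = np.sum(np.abs(np.diff(win)))           # time-domain feature
    psd = np.abs(np.fft.rfft(win)) ** 2
    freqs = np.fft.rfftfreq(win.size, 1 / fs)
    theta_power = psd[(freqs >= 4) & (freqs < 8)].sum()  # frequency-domain
    return [line_length, np.log1p(theta_power)]

rng = np.random.default_rng(2)
fs = 256
X, y = [], []
for i in range(400):
    is_seizure = i % 2
    # Toy stand-in: "seizure" windows simply have higher amplitude.
    win = rng.normal(0, 1 + 2 * is_seizure, 2 * fs)
    X.append(window_features(win, fs))
    y.append(is_seizure)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```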

    Predicting Ordinal Level of Sedation from the Spectrogram of Electroencephalography

    In the intensive care unit, the sedation level of patients is usually monitored by periodically assessing the behavioral response to stimuli. However, these clinical assessments are limited because they disrupt patients' sleep and observe behavior rather than brain activity directly. Here we train a gated recurrent unit using the spectrogram of electroencephalography (EEG) from 166 mechanically ventilated patients to predict the Richmond Agitation-Sedation Scale score, treated as ordinal levels from -5 up to 0. The model achieves 50% accuracy with an error of no more than one level, and 80% accuracy with an error of no more than two levels, on held-out test patients. We show typical spectrograms for each sedation level and interpret the results by visualizing the gradient with respect to the spectrogram. Future improvements include utilizing the raw EEG waveforms, since waveform patterns are clinically thought to be associated with sedation levels, as well as training patient-specific models.
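
    The "within k levels" evaluation used here is easy to make concrete. A small sketch, with made-up labels and predictions:

```python
# Ordinal "within k levels" accuracy; the labels here are made up.
import numpy as np

def within_k_accuracy(y_true, y_pred, k):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)) <= k)

y_true = [-5, -4, -3, -2, -1, 0, -3, -2]
y_pred = [-5, -3, -3, 0, -1, -1, -1, -2]
print(within_k_accuracy(y_true, y_pred, 1))  # fraction within one level
print(within_k_accuracy(y_true, y_pred, 2))  # fraction within two levels
```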

    Brain Monitoring of Sedation in the Intensive Care Unit Using a Recurrent Neural Network

    Over- and under-sedation are common in critically ill patients admitted to the intensive care unit. Clinical assessments provide limited time resolution and are based on behavior rather than the brain itself. Existing brain monitors have been developed primarily for non-ICU settings. Here, we use a clinical dataset from 154 ICU patients in whom the Richmond Agitation-Sedation Scale is assessed about every 2 hours. We develop a recurrent neural network (RNN) model to discriminate between deep sedation and no sedation, trained end-to-end from raw EEG spectrograms without any feature extraction. We obtain an average area under the ROC curve of 0.8 on 10-fold cross-validation across patients. Our RNN provides estimates of sedation level that are consistently more reliable than those of a feed-forward model with simple smoothing. Decomposing the prediction error by sedative suggests that patient-specific calibration for sedatives could further improve sedation monitoring.
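
    Patient-wise cross-validation, where folds are split by patient so no subject contributes windows to both training and test sets, can be sketched with scikit-learn's GroupKFold. The classifier and synthetic features below are placeholders for the RNN and spectrograms.

```python
# Patient-wise 10-fold CV sketch: GroupKFold keeps each patient's windows
# in a single fold. Features, labels, and the linear classifier are
# placeholders for the spectrograms and the RNN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(3)
n_windows, n_features, n_patients = 2000, 20, 154
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 2, n_windows)                # deep vs. no sedation
groups = rng.integers(0, n_patients, n_windows)  # patient ID per window

aucs = []
for train_idx, test_idx in GroupKFold(n_splits=10).split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx],
                              clf.predict_proba(X[test_idx])[:, 1]))
print("mean AUC:", np.mean(aucs))  # about 0.5 on random data, as expected
```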

    Neonatal seizure detection using atomic decomposition with a novel dictionary

    Atomic decomposition (AD) can be used to efficiently decompose an arbitrary signal. In this paper, we present a method to detect seizures in the neonatal electroencephalogram (EEG) based on AD via orthogonal matching pursuit using a novel, application-specific dictionary. The dictionary consists of pseudo-periodic Duffing oscillator atoms which are designed to be coherent with the seizure epochs. The relative structural complexity (a measure of the rate of convergence of AD) is used as the sole feature for seizure detection. The proposed feature was tested on a large clinical dataset of 826 h of EEG data from 18 full-term newborns with 1389 seizures. The seizure detection system using the proposed dictionary achieved a median receiver operating characteristic area of 0.91 (IQR 0.87-0.95) across the 18 neonates.
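
    The relative-structural-complexity idea, how quickly a greedy atomic decomposition converges, can be sketched with a plain orthogonal matching pursuit loop. The random unit-norm dictionary below is a stand-in for the paper's pseudo-periodic Duffing oscillator atoms.

```python
# Greedy orthogonal matching pursuit; the atom count needed to reach a
# residual tolerance stands in for "relative structural complexity".
# A random unit-norm dictionary replaces the Duffing-oscillator atoms.
import numpy as np

def omp_complexity(signal, dictionary, tol=0.1):
    """Number of atoms needed to explain the signal to relative tolerance."""
    residual = signal.copy()
    selected = []
    while np.linalg.norm(residual) > tol * np.linalg.norm(signal):
        selected.append(int(np.argmax(np.abs(dictionary.T @ residual))))
        sub = dictionary[:, selected]
        coef, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ coef
        if len(selected) >= dictionary.shape[1]:
            break  # dictionary exhausted
    return len(selected)

rng = np.random.default_rng(4)
n_samples, n_atoms = 256, 128
D = rng.normal(size=(n_samples, n_atoms))
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
structured = D[:, :3] @ np.array([2.0, -1.5, 1.0])  # built from 3 atoms
noise = rng.normal(size=n_samples)
# A signal coherent with the dictionary converges in few atoms; noise does not.
print(omp_complexity(structured, D), omp_complexity(noise, D))
```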

    Novel drug-independent sedation level estimation based on machine learning of quantitative frontal electroencephalogram features in healthy volunteers

    Background: Sedation indicators based on a single quantitative EEG (QEEG) feature have been criticised for their limited performance. We hypothesised that integration of multiple QEEG features into a single sedation-level estimator using a machine learning algorithm could reliably predict levels of sedation, independent of the sedative drug used. Methods: In total, 102 healthy volunteers receiving propofol (n = 36; 16 male/20 female), sevoflurane (n = 36; 16 male/20 female), or dexmedetomidine (n = 30; 15 male/15 female) were included in this study. Sedation level was assessed using the Modified Observer's Assessment of Alertness/Sedation (MOAA/S) score. We used 44 QEEG features estimated from the EEG data in a logistic regression algorithm, and an elastic-net regularisation method was used for feature selection. The area under the receiver operating characteristic curve (AUC) was used to assess the performance of the logistic regression model. Results: When the system was trained and tested in drug-dependent mode to distinguish between awake and sedated states, performance (mean AUC [standard deviation]) was propofol = 0.97 (0.03), sevoflurane = 0.74 (0.25), and dexmedetomidine = 0.77 (0.10). The drug-independent system achieved a mean AUC of 0.83 (0.17) for discriminating between the awake and sedated states. Conclusions: The incorporation of large numbers of QEEG features and machine learning algorithms is feasible for next-generation monitors of sedation level. Different QEEG features were selected for the propofol, sevoflurane, and dexmedetomidine groups, but the sedation-level estimator maintained high performance for predicting MOAA/S independent of the drug used.
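
    The elastic-net feature-selection step can be sketched as follows: fit a regularised logistic regression on 44 standardised QEEG features and inspect which coefficients remain non-zero. Data, labels, and the regularisation settings are illustrative placeholders.

```python
# Elastic-net selection over 44 standardised QEEG features; data, labels,
# and regularisation strength are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_epochs, n_features = 600, 44
X = rng.normal(size=(n_epochs, n_features))
# Synthetic "sedated" label driven by a handful of the features.
y = (X[:, 0] - 0.8 * X[:, 7] + 0.5 * X[:, 21]
     + rng.normal(0, 1, n_epochs) > 0).astype(int)

X = StandardScaler().fit_transform(X)
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.7, C=0.1, max_iter=5000).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print(f"{selected.size} of {n_features} features kept:", selected)
```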